25 research outputs found

    Integration of Clouds to Industrial Communication Networks

    Cloud computing, owing to its ubiquity, scalability and on-demand access, has transformed many traditional sectors, such as telecommunications and manufacturing. As the Fifth Generation Wireless Specifications (5G) emerge, the demand for ubiquitous and re-configurable computing resources to handle tremendous traffic from omnipresent mobile devices has grown, and therein lies the adoption of the cloud-native model in the service delivery of telecommunication networks. However, transforming the traditional telco infrastructure into a softwarized model takes a phased approach, especially for Radio Access Networks (RANs), which, as of now, mostly rely on purpose-built Digital Signal Processors (DSPs) for computing and processing tasks.

    On the other hand, Industry 4.0 is leading the digital transformation of the manufacturing sector, wherein industrial networks are evolving towards wireless connectivity and automation process management is shifting to the cloud. However, such integration may introduce unwanted disturbances to critical industrial automation processes, making it challenging to guarantee the performance of critical applications when these systems are integrated.

    In the work presented in this thesis, we explore the feasibility of integrating wireless communication, industrial networks and cloud computing. We investigate the delay-related challenges and the performance impact of using cloud-native models for critical applications, and we design a solution aimed at mitigating the performance degradation caused by the integration of cloud computing.

    Latency-aware Radio Resource Allocation over Cloud RAN for Industry 4.0

    The notion of Cloud RAN is taking a prominent role in the narrative for the next-generation wireless infrastructure. It is also seen as a means to serve industrial communication systems. In order to provide reliable wireless connectivity for industrial deployments by conventional means, the cloud infrastructure needs to be reliable and incur little latency, which, however, is at odds with the stochastic nature of cloud infrastructures. In this paper, we investigate the impact of stochastic delay on a radio resource allocation process deployed in Cloud RAN. We propose a strategy for realizing timely cloud responses and then adapt that strategy to a radio resource allocation problem. Further, we evaluate the strategies in an industrial IoT scenario using a simulated environment. Experimentation shows that, with our proposed strategy, a significant improvement in timely responses can be achieved even in a noisy cloud environment. Improvements in resource utilization can also be attained for a resource allocation process deployed over Cloud RAN with this strategy.
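
    As a rough, hedged illustration of the kind of effect studied here (not the paper's actual model or code), the Python sketch below draws stochastic cloud response delays from an assumed lognormal distribution and estimates how often a cloud-hosted resource-allocation step would respond before a scheduling deadline; the function name and all numbers are illustrative.

        # Hypothetical sketch (not the paper's code): estimate how often a cloud-hosted
        # resource-allocation step responds within a scheduling deadline when the
        # round-trip delay is stochastic; the lognormal model is an assumption.
        import numpy as np

        def timely_response_rate(deadline_ms, median_ms=2.0, sigma=0.8, n=100_000, seed=0):
            rng = np.random.default_rng(seed)
            # lognormal delay with the given median (illustrative choice only)
            delays = rng.lognormal(mean=np.log(median_ms), sigma=sigma, size=n)
            return float(np.mean(delays <= deadline_ms))

        for d_ms in (1, 2, 5, 10):
            print(f"deadline {d_ms:2d} ms -> timely-response rate {timely_response_rate(d_ms):.3f}")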

    Is Cloud RAN a Feasible Option for Industrial Communication Network?

    Cloud RAN (C-RAN) is a promising paradigm for the next-generation radio access network infrastructure, offering centralised and coordinated baseband signal processing in a cloud-based BBU pool. This requires extremely low-latency responses to achieve real-time signal processing. In this paper, we analysed the challenges of introducing a cloud-native model for signal processing in C-RAN. We studied the difficulties of achieving real-time processing in a cloud infrastructure by addressing its latency constraints. To evaluate the performance of such a system, we mainly investigated a massive MIMO pilot scheduling process in a C-RAN infrastructure under a factory automation scenario. We considered the stochastic delays incurred by the cloud execution environment as the main constraint affecting the scheduling performance. We use simulations to provide insights into the feasibility of C-RAN deployment for industrial communication, which must meet stringent Industry 4.0 criteria under this constraint. Our experimental results show that, for the pilot scheduling problem, the C-RAN system is capable of meeting the industrial criteria when the fronthaul and the cloud execution environment introduce latency on the order of milliseconds.
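
    To make the latency constraint concrete, the following hedged sketch (assumptions of ours, not the paper's setup) splits the C-RAN budget into a fixed fronthaul round trip plus a stochastic cloud execution time and estimates the probability of missing a millisecond-scale scheduling deadline.

        # Hedged sketch: total scheduling delay = fronthaul round trip + stochastic
        # cloud execution time; estimate the probability of missing a millisecond-
        # scale deadline. All distributions and numbers are illustrative assumptions.
        import numpy as np

        def deadline_miss_probability(deadline_ms=1.0, fronthaul_one_way_ms=0.25,
                                      exec_median_ms=0.4, exec_sigma=0.6,
                                      n=200_000, seed=1):
            rng = np.random.default_rng(seed)
            exec_delay = rng.lognormal(np.log(exec_median_ms), exec_sigma, size=n)
            total = 2.0 * fronthaul_one_way_ms + exec_delay   # uplink + downlink + compute
            return float(np.mean(total > deadline_ms))

        print(f"deadline-miss probability: {deadline_miss_probability():.4f}")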

    Punctual Cloud : Unbinding Real-time Applications from Cloud-induced Delays

    Cloud computing has become a prominent computing paradigm in various industrial sectors. For most industrial applications to perform in real time, support for periodic computing is required. However, this remains a challenge when the computation is executed in a cloud, since both the network connection and the cloud environment are uncertain. In this paper, we propose a new architecture for deploying real-time applications in the cloud, which we call the punctual cloud. We detail the implementation and demonstrate how the punctual cloud is deployed in a cloud-native manner on Kubernetes. We evaluate the system's performance with a real-time resource allocation problem and show that, compared to a system without the punctual cloud, which achieves at most 40% punctual deliveries, our proposed architecture can deliver over 90% of responses punctually to the application, while also remedying the performance degradation caused by long and uncertain response delays in the system.
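
    A minimal sketch of the punctuality metric reported above, assuming a response counts as punctual when it arrives within the application's period; the helper name and the toy latency trace are ours, not part of the paper.

        # Hedged sketch of the punctuality metric: a response is punctual if it
        # arrives within the application's period. Names and the toy latency trace
        # are illustrative, not taken from the paper.
        import numpy as np

        def punctual_ratio(response_delays_ms, period_ms):
            delays = np.asarray(response_delays_ms, dtype=float)
            return float(np.mean(delays <= period_ms))

        rng = np.random.default_rng(2)
        period_ms = 10.0                                      # assumed control period
        delays = rng.lognormal(np.log(9.0), 1.0, size=1000)   # noisy cloud path (toy)
        print(f"punctual deliveries: {100 * punctual_ratio(delays, period_ms):.1f}%")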

    Massive MIMO Pilot Scheduling over Cloud RAN

    Cloud-RAN (C-RAN) is a promising paradigm for the next-generation radio access network infrastructure, which offers centralized and coordinated baseband signal processing. On the other hand, this requires extremely low-latency fronthaul links to achieve real-time centralized signal processing. In this paper, we investigate massive MIMO pilot scheduling in a C-RAN infrastructure. Three commonly used scheduling policies are investigated with simulations in order to provide insight into how the scheduling performance is affected by the latency incurred by the C-RAN infrastructure.
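
    The paper does not list its three policies here, so as a hypothetical illustration the sketch below shows one way C-RAN latency degrades a scheduler: a greedy policy picks the strongest users from channel state that is several slots old, and the achieved gain drops as the decision delay grows (the AR(1) channel model and all parameters are assumptions).

        # Hypothetical illustration of latency-degraded scheduling (the paper's three
        # policies are not named here): a greedy scheduler picks the P strongest of K
        # users, but decides on channel state that is `delay_slots` old; channels
        # evolve as an AR(1) process, so the achieved gain drops with the delay.
        import numpy as np

        def mean_served_gain(delay_slots, K=16, P=8, rho=0.9, slots=2000, seed=4):
            rng = np.random.default_rng(seed)
            h = rng.standard_normal(K)
            history = [h.copy()]
            total = 0.0
            for _ in range(slots):
                h = rho * h + np.sqrt(1.0 - rho**2) * rng.standard_normal(K)
                history.append(h.copy())
                stale = history[max(0, len(history) - 1 - delay_slots)]
                chosen = np.argsort(-np.abs(stale))[:P]       # decide on stale gains
                total += np.sum(np.abs(h[chosen]))            # pay off on current gains
            return total / slots

        for d in (0, 1, 5, 20):
            print(f"decision delay {d:2d} slots -> mean served gain {mean_served_gain(d):.2f}")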

    I329L protein-based indirect ELISA for detecting antibodies specific to African swine fever virus

    African swine fever (ASF) is a disease that causes severe economic losses to the global porcine industry. As no vaccine or drug has been discovered for the prevention and control of ASF virus (ASFV), accurate diagnosis and timely eradication of infected animals are the primary control measures, which necessitate accurate and effective detection methods. In this study, the truncated ASFV I329L protein (amino acids 70–237) was expressed in Escherichia coli cells following IPTG induction. The highly antigenic viral protein I329L was used to develop an indirect enzyme-linked immunosorbent assay (iELISA), named I329L-ELISA, whose cut-off value was 0.384. The I329L-ELISA was used to test 186 clinical pig serum samples, and the coincidence rate between the indirect ELISA developed here and a commercial kit was 96.77%. No cross-reactivity was observed with CSFV, PRRSV, PCV2, or PRV antibody-positive pig sera, indicating good specificity. Both intra-assay and inter-assay coefficients of variation were below 10%, and the detection sensitivity of the iELISA reached 1:3200. Overall, the I329L-ELISA developed here, based on the truncated ASFV I329L protein, is a user-friendly detection tool suitable for ASFV antibody detection and epidemiological surveillance.
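
    As a worked toy example of how such readings are interpreted (only the 0.384 cut-off comes from the abstract; the OD values and kit results below are invented), a sample is called positive when its OD exceeds the cut-off, and the coincidence rate is the fraction of samples on which the iELISA and the reference kit agree.

        # Worked toy example (only the 0.384 cut-off is from the abstract; OD values
        # and kit results are invented): classify OD450 readings against the cut-off
        # and compute the coincidence rate with a reference kit.
        import numpy as np

        CUTOFF = 0.384  # reported I329L-ELISA cut-off value

        def classify(od_values, cutoff=CUTOFF):
            return np.asarray(od_values, dtype=float) >= cutoff   # True = positive

        def coincidence_rate(test_calls, reference_calls):
            test_calls = np.asarray(test_calls, dtype=bool)
            reference_calls = np.asarray(reference_calls, dtype=bool)
            return float(np.mean(test_calls == reference_calls))

        od450 = [0.12, 0.95, 0.40, 0.20, 1.30, 0.35]              # hypothetical sera
        kit = [False, True, True, False, True, False]             # hypothetical kit calls
        calls = classify(od450)
        print("iELISA calls:", calls.tolist())
        print(f"coincidence rate: {100 * coincidence_rate(calls, kit):.1f}%")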

    Data- and machine-learning-driven retrieval and imaging of seismic-reflection data in complex media

    The first study in this thesis is designed to cover both Marchenko redatuming and Marchenko-based primary estimation in a range of subsampling scenarios, providing insights that can inform acquisition sampling choices as well as processing parameterization and quality control, e.g., to set up appropriate data filters and scaling to accommodate the effects of dipole fields, or to help ensure that data interpolation achieves the desired level of reconstruction quality that minimizes subsampling artifacts in Marchenko-derived fields and images.

    With the aim of handling short-period multiples in laterally varying media with complex, realistic layering, an updated, augmented Marchenko approach is proposed in this thesis, providing reliable approximations to the problem. This is achieved by combining an adapted 1.5D approach and additional post-processing with the recently proposed Marchenko-type demultiple formula. The new approach and its limitations are demonstrated with a range of numerical models that capture varying degrees of lateral heterogeneity in the overburden. The benchmark of migrated images after removing short-period multiples suggests promising applications to field data.

    As a next step in the reflection-data processing chain, the migrated images are nonetheless blurred, with uneven amplitudes and low resolution, despite a successful multiple-removal procedure. As an important post-processing step, seismic image deblurring aims to enhance both resolution and amplitude balance. Conventional methods developed for such an inverse problem are usually cumbersome and slowly converging, and require case-dependent user interference, e.g. in the form of preconditioning and the fine-tuning of free regularization parameters. This thesis proposes instead to address the problem with a physics-based machine learning technique, the (invertible) Recurrent Inference Machine, by explicitly employing the point spread function as the forward operator and thus as prior information. With a simple, almost-flat synthetic training model and very low computational cost, this new approach outperforms the benchmark of the invertible UNet (a commonly used convolutional neural network architecture) in a series of tests designed with both complex synthetic models and a field-data application. Unlike the other methods, the neural networks in this study are trained to map the migrated image into an impedance perturbation model, rather than a reflectivity model, bringing it closer to the next-level application of impedance inversion for reservoir characterisation and time-lapse monitoring.

    In the last part of the thesis, a general and unified framework is proposed which flexibly incorporates all kinds of interface conditions for any type of wave equation. With the new approach, the implementation of a wave-equation solver can be benchmarked by comparing plane-wave simulations at different angles to transmission/reflection coefficients from plane-wave analysis with exact boundary conditions. This is illustrated with the case of the poroelastic wave equation, which has wide applications in rock physics and reservoir monitoring.
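
    As a rough illustration of the plane-wave benchmarking idea described in the last paragraph, the sketch below evaluates exact reflection/transmission coefficients at a single interface for the much simpler acoustic (fluid-fluid) case; the thesis treats general interface conditions and the poroelastic equations, and all medium parameters here are illustrative.

        # Hedged sketch: exact plane-wave pressure reflection/transmission coefficients
        # at a single fluid-fluid interface, the kind of analytic reference a solver can
        # be benchmarked against. Acoustic case only; the thesis treats general interface
        # conditions and the poroelastic equations. Medium parameters are illustrative.
        import numpy as np

        def acoustic_rt(theta1_deg, rho1, c1, rho2, c2):
            theta1 = np.deg2rad(np.asarray(theta1_deg, dtype=float))
            sin2 = (c2 / c1) * np.sin(theta1)                     # Snell's law
            cos2 = np.sqrt((1.0 - sin2**2).astype(complex))       # complex past the critical angle
            z1 = rho1 * c1 / np.cos(theta1)
            z2 = rho2 * c2 / cos2
            r = (z2 - z1) / (z2 + z1)                             # pressure reflection coefficient
            return r, 1.0 + r                                     # pressure transmission coefficient

        angles = np.array([0.0, 20.0, 40.0, 60.0])
        R, T = acoustic_rt(angles, rho1=1000.0, c1=1500.0, rho2=2000.0, c2=2500.0)
        for a, r, t in zip(angles, R, T):
            print(f"theta = {a:4.1f} deg   |R| = {abs(r):.3f}   |T| = {abs(t):.3f}")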

    Latency prediction in 5G for control with deadtime compensation

    With the promise of increased responsiveness and robustness of the emerging 5G technology, it is suddenly becoming feasible to deploy latency-sensitive control systems over the cloud via a mobile network. Even though 5G is heralded to provide lower latency and jitter than current mobile networks, the effect of the delay would still be non-negligible for certain applications. In this paper we explore and demonstrate the possibility of compensating for the unknown and time-varying latency introduced by a 5G mobile network when controlling a latency-sensitive plant. We show that the latency from a prototype 5G test bed lacks significant short-term correlation, making accurate latency prediction a difficult task. Further, because of the unknown and time-varying latency, the simple interpolation-based model we use exhibits some troubling theoretical properties, limiting its usability in real-world environments. Despite this, we give a demonstration of the strategy, which seems to increase robustness in a simulated plant.
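
    A small hedged sketch of why the reported lack of short-term correlation matters for prediction: on an (assumed) independent, identically distributed latency trace, copying the last observed sample predicts the next latency no better than a running mean does; the trace below is synthetic, not test-bed data.

        # Hedged sketch (synthetic i.i.d. trace, not test-bed data): with negligible
        # short-term correlation, copying the last latency sample predicts the next
        # one no better than a running mean does.
        import numpy as np

        rng = np.random.default_rng(5)
        latency_ms = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=10_000)

        last_value_error = latency_ms[1:] - latency_ms[:-1]
        running_mean = np.cumsum(latency_ms)[:-1] / np.arange(1, latency_ms.size)
        running_mean_error = latency_ms[1:] - running_mean

        print(f"lag-1 autocorrelation   : {np.corrcoef(latency_ms[:-1], latency_ms[1:])[0, 1]:+.3f}")
        print(f"RMSE, last-value pred.  : {np.sqrt(np.mean(last_value_error**2)):.2f} ms")
        print(f"RMSE, running-mean pred.: {np.sqrt(np.mean(running_mean_error**2)):.2f} ms")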

    Data-driven suppression of short-period multiples from laterally varying thin-layered overburden structures

    Marchenko multiple elimination methods remove all orders of overburden-generated internal multiples in a data-driven way. In the presence of thin beds, however, these methods have been shown to underperform. This is because the underlying inverse problem requires the information about the short-period internal multiple (SPIM) imprint on the inverse transmission to be correctly constrained. This has been addressed in 1.5D media with energy conservation and minimum-phase reconstruction. Extending the applications to two dimensions, and hence making the step toward field data, was believed to be hampered by the need for a multidimensional minimum-phase reconstruction, which (1) is not unique and (2) lacks an algorithm that can perform it in practice on band-limited data. Here, we address both of these problems with an approach that includes solving the Marchenko equation with a trivial constraint, evaluating the energy conservation condition of its solutions to find the spatially dependent error syndrome, using the 1.5D minimum-phase reconstruction for each shot gather to find the spatially dependent constraint, and finally using that inside another run of the Marchenko equation solver to find a much-improved result. We find that the method works because in 2D media the expression of SPIMs in the inverse transmission coda is approximately 1.5D. We then investigate a class of models and synthetic data sets to verify where the 1.5D approximation starts breaking down. Our analysis indicates that this approach could perform very well in settings with moderate lateral variations, which is also where (short-period) internal multiples are most difficult to differentiate from primary reflections.
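
    For readers unfamiliar with the minimum-phase ingredient, the sketch below shows a standard 1D cepstral (Kolmogorov-type) reconstruction of a minimum-phase signal from an amplitude spectrum; it is only a generic illustration, not the paper's 1.5D operator applied to transmission codas.

        # Generic 1D illustration (not the paper's 1.5D operator): Kolmogorov-type
        # cepstral reconstruction of a minimum-phase signal from an amplitude spectrum.
        import numpy as np

        def minimum_phase(amplitude_spectrum, eps=1e-12):
            """Return the minimum-phase time signal whose |FFT| matches the input."""
            n = amplitude_spectrum.size
            cepstrum = np.fft.ifft(np.log(amplitude_spectrum + eps))
            window = np.zeros(n)
            window[0] = 1.0
            window[1:(n + 1) // 2] = 2.0
            if n % 2 == 0:
                window[n // 2] = 1.0
            min_phase_spectrum = np.exp(np.fft.fft(cepstrum * window))
            return np.real(np.fft.ifft(min_phase_spectrum))

        # toy check: discard the phase of an arbitrary wavelet and rebuild a
        # minimum-phase wavelet with (numerically) the same amplitude spectrum
        t = np.arange(64)
        wavelet = np.exp(-0.05 * t) * np.sin(0.6 * t)
        amp = np.abs(np.fft.fft(wavelet))
        reconstructed = minimum_phase(amp)
        print(np.allclose(np.abs(np.fft.fft(reconstructed)), amp, rtol=1e-5, atol=1e-8))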